Frequency: Quarterly E-ISSN: 2230-8121 P-ISSN: 2249-1295 Abstracted/Indexed in: Ulrich's International Periodical Directory, Google Scholar, SCIRUS, Genamics JournalSeek, EBSCO Information Services
Published quarterly in print and online, "Inventi Impact: Biomedical Engineering" publishes high-quality unpublished research as well as high-impact pre-published research and reviews, catering to the needs of researchers and professionals. This multidisciplinary journal covers all recent advances in biomedical technology, instrumentation, and administration. Papers are invited on theoretical and practical problems associated with the development of medical technology; the introduction of new engineering methods into public health, hospitals, and patient care; the improvement of diagnosis and therapy; and biomedical information storage and retrieval.
Background: Nano-photothermal therapy (NPTT) has gained wide attention in cancer treatment due to its high efficiency and selective treatment strategy. The biggest challenges in clinical application are the lack of (i) a reliable platform for mapping the thermal dose and (ii) efficient photothermal agents (PTAs). This study developed a 3D treatment-planning model for NPTT, based on our synthesized nanohybrid, to reduce the uncertainty of treatment procedures. Methods: This study aimed to develop a three-dimensional finite element method (FEM) model for in vivo NPTT in mice using magneto-plasmonic nanohybrids, which are complex assemblies of superparamagnetic iron oxide nanoparticles and gold nanorods. The model was based on Pennes' bio-heat equation and utilized a geometrically correct whole-body mouse geometry. CT26 colon tumor-bearing BALB/c mice were injected with nanohybrids and imaged using MRI (3 Tesla) before and after injection. MR images were segmented, and STereoLithography (STL) files of the mouse body and the nanohybrid distribution in the tumor were established to create a realistic geometry for the model. The accuracy of the temperature predictions was validated using an infrared (IR) camera. Results: The photothermal conversion efficiency of the nanohybrids was experimentally determined to be approximately 30%. The intratumoral (IT) injection group showed the highest temperature increase, with a maximum of 17 °C observed at the hottest point on the surface of the tumor-bearing mice for 300 s of laser exposure at a power density of 1.4 W/cm². Furthermore, the highest level of tissue damage, with a maximum value of Ω = 0.4, was observed in the IT injection group, as determined through a simulation study. Conclusions: Our synthesized nanohybrid shows potential as an effective agent for MRI-guided NPTT. The developed model accurately predicted temperature distributions and tissue damage in the tumor.
However, the current temperature validation method, which relies on limited 2D measurements, may be too lenient. Further refinement is necessary to improve validation. Nevertheless, the presented FEM model holds great promise for clinical NPTT treatment planning....
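The FEM model described above rests on two standard relations. As a sketch (symbols are generic; the abstract does not give the paper's tissue parameters or the exact form of the source terms), Pennes' bio-heat equation and the Arrhenius damage integral behind the reported Ω value can be written as:

```latex
% Pennes' bio-heat equation: conduction, blood perfusion, metabolic
% heat, and the laser/nanoparticle heat source
\rho c \frac{\partial T}{\partial t}
  = \nabla \cdot (k \nabla T)
  + \rho_b c_b \omega_b \left( T_b - T \right)
  + Q_{\mathrm{met}} + Q_{\mathrm{laser}}

% Arrhenius thermal damage: \Omega accumulates over exposure time;
% \Omega = 1 is commonly taken as the threshold of irreversible damage,
% so the reported maximum of \Omega = 0.4 stays below that threshold
\Omega(t) = \int_0^{t} A \, e^{-E_a / (R\,T(\tau))} \, \mathrm{d}\tau
```

Here A (frequency factor) and E_a (activation energy) are tissue-specific constants not given in the abstract.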
Background: Intensity inhomogeneity occurs in many medical images, especially in vessel images. Overcoming the difficulty posed by image inhomogeneity is crucial for the segmentation of vessel images. Methods: This paper proposes a localized hybrid level-set method for the segmentation of 3D vessel images. The proposed method integrates both local region information and boundary information for vessel segmentation, which is essential for the accurate extraction of tiny vessel structures. The local intensity information is first embedded into a region-based contour model and then incorporated into the level-set formulation of the geodesic active contour model. Compared with the preset global threshold based method, the use of automatically calculated local thresholds enables the extraction of local image information, which is essential for the segmentation of vessel images. Results: Experiments carried out on the segmentation of 3D vessel images demonstrate the strengths of using locally specified dynamic thresholds in our level-set method. Furthermore, both qualitative comparisons and quantitative validations have been performed to evaluate the effectiveness of our proposed model. Conclusions: Experimental results and validations demonstrate that our proposed model can achieve more promising segmentation results than the original hybrid method does....
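The abstract does not give the exact hybrid formulation. As background, a hybrid level-set model of this kind typically combines a geodesic active contour (boundary) term with a Chan-Vese-style region term; schematically:

```latex
% Geodesic active contour (boundary term): g is an edge-stopping
% function of the image gradient, \phi the level-set function
\frac{\partial \phi}{\partial t}
  = g(|\nabla I|)\,|\nabla \phi|
    \left( \operatorname{div}\!\left( \frac{\nabla \phi}{|\nabla \phi|} \right) + c \right)
  + \nabla g \cdot \nabla \phi

% Region term (Chan-Vese style): c_1, c_2 are mean intensities inside
% and outside the contour; in a localized hybrid model they are
% computed over a local window rather than globally
\frac{\partial \phi}{\partial t}
  = \delta(\phi) \left[ \mu \operatorname{div}\!\left( \frac{\nabla \phi}{|\nabla \phi|} \right)
    - \lambda_1 (I - c_1)^2 + \lambda_2 (I - c_2)^2 \right]
```

Localizing c_1 and c_2 is what lets the contour follow thin vessels despite intensity inhomogeneity.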
Image segmentation is an important task in many areas, from image processing to image analysis. One of the simplest methods for image segmentation is thresholding. However, many thresholding methods are based on a bi-level thresholding procedure. These methods can be extended to multi-level thresholding, but they become computationally expensive because a large number of iterations is required to compute the optimum threshold values. To overcome this disadvantage, a new method based on a Shrinking Search Space (3S) algorithm is proposed in this paper. The method is applied to statistical bi-level thresholding approaches, including Entropy, Cross-entropy, Covariance, and Divergent Based Thresholding (DBT), to achieve multi-level thresholding, and is used for intracranial segmentation from brain MRI images. The paper demonstrates that the impact of the proposed 3S technique on the DBT method is more significant than on the other bi-level thresholding approaches. Comparing the results of the proposed approach against those of the Fuzzy C-Means (FCM) clustering method demonstrates better segmentation performance, improving the similarity index from 0.58 in FCM to 0.68 in the 3S method. The method also has a lower computational complexity, with a processing time of around 0.37 s compared with 157 s in FCM. In addition, the FCM approach does not always guarantee convergence, whilst the 3S technique always converges to the optimum res....
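The 3S algorithm itself is not detailed in this summary. For reference, a minimal bi-level entropy (Kapur) threshold, one of the criteria the 3S technique is applied to, can be sketched in pure Python (function name and histogram input are illustrative):

```python
import math

def kapur_threshold(hist):
    """Bi-level Kapur entropy thresholding on a grayscale histogram.

    hist: list of pixel counts per intensity level.
    Returns the threshold t (first bin of the foreground class) that
    maximizes the sum of background and foreground entropies.
    """
    total = sum(hist)
    p = [h / total for h in hist]
    best_t, best_h = 0, float("-inf")
    for t in range(1, len(p)):
        w0 = sum(p[:t])      # background class probability
        w1 = 1.0 - w0        # foreground class probability
        if w0 <= 0 or w1 <= 0:
            continue
        # Entropy of each class's normalized distribution
        h0 = -sum(q / w0 * math.log(q / w0) for q in p[:t] if q > 0)
        h1 = -sum(q / w1 * math.log(q / w1) for q in p[t:] if q > 0)
        if h0 + h1 > best_h:
            best_h, best_t = h0 + h1, t
    return best_t
```

Multi-level thresholding repeats this search over several thresholds jointly, which is where the iteration count explodes and where shrinking the search space pays off.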
Background: Currently there are no standard models with which to evaluate the biomechanical performance of calcified tissue adhesives in vivo. We present, herein, a pre-clinical murine distal femoral bone model for evaluating tissue adhesives intended for use in both osseous and osteochondral tissue reconstruction....
Background: The morphology of an adrenal tumor and the clinical statistics of the adrenal tumor area are two crucial features for diagnosis and differential diagnosis, so precise tumor segmentation is essential. We therefore built a CT image segmentation method based on an encoder–decoder structure combined with a Transformer for volumetric segmentation of adrenal tumors. Methods: This study included a total of 182 patients with adrenal metastases, and an adrenal tumor volumetric segmentation method combining an encoder–decoder structure and a Transformer was constructed. The Dice score coefficient (DSC), Hausdorff distance, intersection over union (IOU), average surface distance (ASD) and mean average error (MAE) were calculated to evaluate the performance of the segmentation method. Results: We compared our proposed method with other CNN-based and Transformer-based methods. The results showed excellent segmentation performance, with a mean DSC of 0.858, a mean Hausdorff distance of 10.996, a mean IOU of 0.814, a mean MAE of 0.0005, and a mean ASD of 0.509. The boxplot of segmentation performance over all test samples implies that the proposed method has the lowest skewness and the highest average prediction performance. Conclusions: Our proposed method can directly generate 3D lesion maps and showed excellent segmentation performance. The comparison of segmentation metrics and visualization results showed that our proposed method performs very well in segmentation....
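DSC and IOU are standard overlap measures; a minimal sketch for binary masks (flattened 0/1 volumes; names are illustrative, not the paper's implementation):

```python
def dice_and_iou(pred, target):
    """Dice score and intersection-over-union for binary masks given as
    equal-length sequences of 0/1 labels (e.g. flattened 3D volumes)."""
    inter = sum(p & t for p, t in zip(pred, target))
    p_sum, t_sum = sum(pred), sum(target)
    union = p_sum + t_sum - inter
    # Both metrics are defined as 1.0 when both masks are empty
    dice = 2 * inter / (p_sum + t_sum) if p_sum + t_sum else 1.0
    iou = inter / union if union else 1.0
    return dice, iou
```

Dice weights the intersection twice, so for the same prediction it is always at least as large as IOU, which matches the 0.858 vs 0.814 means reported above.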
In metabolomics data, like other -omics data, normalization is an important part of data processing. The goal of normalization is to reduce the variation from non-biological sources (such as instrument batch effects) while maintaining the biological variation. Many normalization techniques make adjustments to each sample. One common method is to adjust each sample by its Total Ion Current (TIC), i.e., for each feature in the sample, divide its intensity value by the total for the sample. Because many of the assumptions of these methods are dubious in metabolomics data sets, we compare these methods to two methods that make adjustments separately for each metabolite, rather than for each sample. These two methods are the following: 1) for each metabolite, divide its value by the median level in bridge samples (BRDG); 2) for each metabolite, divide its value by the median across the experimental samples (MED). These methods were assessed by comparing the correlation of the normalized values to the values from targeted assays for a subset of metabolites in a large human plasma data set. The BRDG and MED normalization techniques greatly outperformed the other methods, which often performed worse than performing no normalization at all....
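The per-sample TIC adjustment and the per-metabolite MED adjustment can be sketched as follows (pure Python on a samples × features list of lists; names are illustrative, and BRDG is identical to MED except that the median is taken over designated bridge samples):

```python
import statistics

def tic_normalize(samples):
    """Per-sample TIC normalization: divide each feature intensity by
    the sample's total ion current (its row total)."""
    return [[x / sum(row) for x in row] for row in samples]

def med_normalize(samples):
    """Per-metabolite MED normalization: divide each feature by that
    feature's median across all experimental samples (column median)."""
    cols = list(zip(*samples))
    med = [statistics.median(c) for c in cols]
    return [[x / m for x, m in zip(row, med)] for row in samples]
```

Note the orthogonal directions of the two adjustments: TIC rescales rows (samples), while MED rescales columns (metabolites), which is why their assumptions about the data differ.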
Background: Flash glucose monitoring systems like the FreeStyle Libre (FSL) sensor have gained popularity for monitoring glucose levels in people with diabetes mellitus. This sensor can be paired with an off-label converted real-time continuous glucose monitor (c-rtCGM) plus an ad hoc computer/smartphone interface for remote real-time monitoring of diabetic subjects, allowing for trend analysis and alarm generation. Objectives: This work evaluates the accuracy and agreement between the FSL sensor and the developed c-rtCGM system. As real-time monitoring is the main feature, the system's connectivity was assessed at 5-min intervals during the trials. Methods: One week of glucose data was collected from 16 type 1 diabetic rats using the FSL sensor and the c-rtCGM. Baseline blood samples were taken on the first day, before inducing type 1 diabetes with streptozotocin. Once diabetes was confirmed, the FSL and c-rtCGM sensors were implanted, and to improve data matching between the two monitoring devices, the c-rtCGM was calibrated against the FSL glucometer readings. A 2 × 3³ factorial design and a second-order regression were used to find the base values of the linear model transformation of the raw data obtained from the sensor. Accuracy, agreement, and connectivity were assessed by median absolute relative difference (Median ARD), range averaging times, Parkes consensus error grid analysis (EGA), and Bland–Altman analysis with a non-parametric approach. Results: Compared to the FSL sensor, the c-rtCGM had an overall Median ARD of 6.58%, with 93.06% of results in zone A when calibration was not carried out. When the calibration frequency changed from every 50 h to every 1 h, the overall Median ARD improved from 6.68% to 2.41%, respectively. The connectivity evaluation showed that 95% of data was successfully received every 5 min by the computer interface.
Conclusions and clinical importance: The results demonstrate the feasibility and reliability of real-time, remote monitoring of subjects with diabetes using the developed c-rtCGM system. Performing calibrations relative to the FSL readings increases the accuracy of the data displayed at the interface....
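The Median ARD used above is simple to state; a minimal sketch (illustrative names; the FSL readings serve as the reference):

```python
import statistics

def median_ard(reference, test):
    """Median absolute relative difference (in %) between paired
    readings from a reference device and the device under evaluation."""
    ards = [abs(t - r) / r * 100 for r, t in zip(reference, test)]
    return statistics.median(ards)
```

Unlike a mean ARD, the median is robust to occasional sensor dropouts or spikes, which matters for a continuously streaming monitor.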
Background: A new framework for heart sound analysis is proposed. One of the most difficult processes in heart sound analysis is segmentation, due to interference from murmurs. Method: Equal numbers of cardiac cycles were extracted from heart sounds with different heart rates using information from envelopes of autocorrelation functions, without the need to label individual fundamental heart sounds (FHS). The complete method consists of envelope detection, calculation of cardiac cycle lengths using autocorrelation of envelope signals, feature extraction using the discrete wavelet transform, principal component analysis, and classification using neural network bagging predictors. Result: The proposed method was tested on a set of heart sounds obtained from several online databases and recorded with an electronic stethoscope. The geometric mean was used as the performance index. Average classification performance using ten-fold cross-validation was 0.92 for the noise-free case, 0.90 under white noise with 10 dB signal-to-noise ratio (SNR), and 0.90 under impulse noise of up to 0.3 s duration. Conclusion: The proposed method showed promising results and high noise robustness across a wide range of heart sounds. However, more tests are needed to address any bias that may have been introduced by the different sources of heart sounds in the current training set, and to concretely validate the method. Further work includes building a new training set recorded from actual patients, then further evaluating the method on this new training set....
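The cycle-length step, picking the dominant lag of the envelope's autocorrelation, can be sketched in pure Python (illustrative; the full pipeline adds wavelet features, PCA, and bagged neural networks):

```python
def cycle_length(envelope, min_lag):
    """Estimate the cardiac cycle length, in samples, as the lag of the
    highest autocorrelation peak of the (mean-removed) envelope signal,
    searched over lags from min_lag up to half the signal length."""
    n = len(envelope)
    mean = sum(envelope) / n
    x = [v - mean for v in envelope]

    def autocorr(lag):
        return sum(x[i] * x[i + lag] for i in range(n - lag))

    return max(range(min_lag, n // 2), key=autocorr)
```

Because the autocorrelation of a periodic envelope peaks at the period regardless of where S1 and S2 fall within the cycle, this avoids labeling the individual fundamental heart sounds.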
The stage of a tumor is sometimes hard to predict, especially early in its development. The size and complexity of the observations are major sources of false diagnoses; even experienced doctors can make mistakes, with terrible consequences for the patient. We propose a mathematical tool for the diagnosis of breast cancer. The aim is to help specialists decide on the likelihood of a patient's condition given the series of observations available. This may increase the patient's chances of recovery. With a multivariate-observation hidden Markov model, we describe the evolution of the disease by taking the geometric properties of the tumor as observable variables. The latent variable corresponds to the type of tumor: malignant or benign. Analysis of the covariance matrix makes it possible to delineate the zones of occurrence for each group belonging to a type of tumor. It is therefore possible to summarize the properties that characterize each of the tumor categories using the parameters of the model. These parameters highlight the differences between the types of tumors....
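The model above uses multivariate (geometric) observations; as a simplified discrete-symbol sketch, the forward algorithm that underlies likelihood evaluation in any HMM (all names and numbers here are illustrative):

```python
def forward_prob(obs, start, trans, emit):
    """Forward algorithm for a discrete-observation HMM: returns the
    probability of the observation sequence, summed over all hidden
    state paths. start[s]: initial probabilities; trans[s][t]:
    transition probabilities; emit[s][o]: emission probabilities."""
    states = range(len(start))
    alpha = [start[s] * emit[s][obs[0]] for s in states]
    for o in obs[1:]:
        # Propagate forward probabilities one step, then apply emission
        alpha = [sum(alpha[s] * trans[s][t] for s in states) * emit[t][o]
                 for t in states]
    return sum(alpha)
```

In a two-state malignant/benign model, comparing forward probabilities under each class's parameters gives the likelihood ratio on which a diagnosis can be based.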
Background: The pupillary light reflex characterizes the direct and consensual response of the eye to the perceived brightness of a stimulus. It has been used as an indicator of both neurological and optic nerve pathologies. As with other eye reflexes, this reflex constitutes an almost instantaneous movement and is linked to activation of the same midbrain area. The latency of the pupillary light reflex is around 200 ms, although the literature also indicates that the fastest eye reflexes last 20 ms. Therefore, a system with sufficiently high spatial and temporal resolution is required for accurate assessment. In this study, we analyzed the pupillary light reflex to determine whether any small discrepancy exists between the direct and consensual responses, and to ascertain whether any other eye reflex occurs before the pupillary light reflex. Methods: We constructed a binocular video-oculography system with two high-speed cameras that simultaneously focused on both eyes. This was then employed to assess the direct and consensual responses of each eye using our own algorithm, based on the Circular Hough Transform, to detect and track the pupil. Time parameters describing the pupillary light reflex were obtained from the radius time-variation. Eight healthy subjects (4 women, 4 men, aged 24–45) participated in this experiment. Results: Our system, which has a resolution of 15 microns and 4 ms, obtained time parameters describing the pupillary light reflex that were similar to those reported in previous studies, with no significant differences between direct and consensual reflexes. Moreover, it revealed an incomplete reflex blink and an upward eye movement at around 100 ms that may correspond to Bell's phenomenon. Conclusions: Direct and consensual pupillary responses do not show any significant temporal differences. The system and method described here could prove useful for further assessment of pupillary and blink reflexes. The resolution obtained revealed the early incomplete blink and upward eye movement reported here....
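The Circular Hough Transform at the core of the pupil tracker votes, for each edge pixel, over candidate circle centers; a minimal fixed-radius sketch (pure Python, illustrative names, no image I/O):

```python
import math

def hough_circle_center(points, radius, grid):
    """Minimal circular Hough transform for a known radius. Each edge
    point votes for candidate centers lying `radius` away from it; the
    accumulator cell with the most votes is the estimated center.
    points: (x, y) edge pixels; grid: (width, height) of the accumulator."""
    acc = {}
    for x, y in points:
        for deg in range(0, 360, 5):
            a = round(x - radius * math.cos(math.radians(deg)))
            b = round(y - radius * math.sin(math.radians(deg)))
            if 0 <= a < grid[0] and 0 <= b < grid[1]:
                acc[(a, b)] = acc.get((a, b), 0) + 1
    return max(acc, key=acc.get)
```

In practice the pupil radius is unknown, so the accumulator gains a third dimension over candidate radii, and tracking re-detects the circle frame by frame from the thresholded pupil edge.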